Fansher et al. Cognitive Research: Principles and Implications (2025) 10:7
https://doi.org/10.1186/s41235-025-00613-w
ORIGINAL ARTICLE
Open Access
© The Author(s) 2025. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which
permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the
original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or
other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line
to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory
regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this
licence, visit http://creativecommons.org/licenses/by/4.0/.
Narrative visualizations: Depicting accumulating risks and increasing trust in data
Madison Fansher6*, Logan Walls1, Chenxu Hao2, Hari Subramonyam3, Aysecan Boduroglu4, Priti Shah1 and Jessica K. Witt5
Abstract
In contexts where people lack prior knowledge and risk awareness, such as the COVID-19 pandemic, even truthful visualizations of data can seem surprising. This can lead people to mistrust the veracity of the data and to discount it, leading to poor risk decisions. In this work, we illustrate how narrative visualizations can achieve a balance between the benefits of three common risk communication mediums (static visualizations, interactive simulations, and affect-laden anecdotes). We demonstrate empirically that viewing a narrative visualization mitigates the reduced concern induced by a static visualization when communicating COVID-19 transmission risk (Study 1). Through mediation analysis, we show that narrative visualizations are more effective than static visualizations at increasing concern about large risks because they increase one's perceived understanding and trust in data (Study 2). We argue that narrative visualizations deserve attention as a distinct class of visualizations that have the potential to be powerful tools for scientific communication (especially in contexts where data are surprising and empiricism is important).
Keywords Data visualization, Misinformation, Risk perception
Introduction
During our lifetimes we repeatedly expose ourselves to extremely small risks. For example, the risk of experiencing skin cancer increases with exposure to UV radiation. To date, there has been a dearth of research on how best to communicate small but accumulating risks, such as the risks associated with continuous exposure to small amounts of foreign or dangerous substances (e.g., heavy metal poisoning) or the repetition of risky behaviors (e.g., driving without a seat belt; Slovic et al., 1978). Prior research suggests that cumulative risks tend to be underestimated on average (Doyle, 1997; Slovic, 2000), though this average may reflect a significant number of individuals who severely underestimate risk and another group who slightly overestimate it (De La Maza et al., 2019). These types of risks are harder to communicate not only because humans are typically bad at accumulation-based judgments, but also because the overall accumulated risk is perceived to be somewhat implausible, leading to a mistrust of the data. Thus, communicative efforts need to focus not only on identifying the most appropriate format to depict the accumulation pattern, but also on finding ways to increase people's trust in the end result.
*Correspondence: Madison Fansher, mfansher@umich.edu
1 Department of Psychology, University of Michigan, Ann Arbor, USA
2 Department of Intelligent Systems, Delft University of Technology, Delft, The Netherlands
3 Graduate School of Education, Stanford University, Stanford, USA
4 Department of Psychology, Koç University, Istanbul, Turkey
5 Colorado State University, Fort Collins, USA
6 Department of Physical Medicine & Rehabilitation, University of Michigan, Ann Arbor, USA

People commonly use charts or other static visualizations to communicate information about complex systems like climate change and global pandemics (Franconeri et al., 2021). For instance, a visualization designer might use an icon array that illustrates a rise in temperature over time to highlight the effect of human activity on climate. Such visualizations effectively convey specific data or outcomes (e.g., that global temperatures
have risen alongside carbon emissions), but they do not illustrate the underlying mechanisms that generate them (e.g., the greenhouse effect). As a result, viewers may not understand the visualization's intended message (Newell et al., 2016) and may even discredit the data if it seems surprising or inconsistent with their prior beliefs (Lord et al., 1979; Rhodes et al., 2014; Shah et al., 2017). From an individual's perspective, mistrust in data could be related to various personal factors, such as one's political identity (Peck et al., 2019) and one's lack of understanding of how the data arise, especially in data showing complex systems such as the exponential growth of disease prevalence (Fansher et al., 2022a; Witt et al., 2022) or visualizations illustrating uncertainty (Padilla et al., 2022a). Furthermore, one's misunderstanding of the purpose of science models (e.g., believing that these models should make accurate predictions or depict reality) may also lead to distrust in data (Witt et al., 2022). In the current study, we examined methods for communicating cumulative risk, namely with narrative visualizations, static visualizations, and anecdotes. We also examined whether one's trust in and understanding of the data mediated the relationship between the presence of a visualization and change in behavior or risk understanding.
The role ofanecdotes
One approach for risk communicators is to avoid discussing the data altogether, and instead give an anecdote about an individual who was negatively affected by the risk. This approach is often used to increase concern about low-risk events. In fact, prior research has shown that anecdotes may be more persuasive than providing statistical data in health-related contexts, especially when personal decisions are involved or when threats are serious (for a review/meta-analysis, see Freling et al., 2020). Anecdotes are thought to be effective because they are concrete and easier to understand than statistical evidence, are emotionally interesting, and are possibly more memorable (Freling et al., 2020). For example, successful texting-and-driving interventions often rely on stories about texting-and-driving accidents rather than presenting data about accident risk (Cutello et al., 2020). In general, stories that tug at one's emotions, increase fear, and provide visual imagery tend to be persuasive (Cutello et al., 2020).
Unfortunately, communicating risks solely with anecdotes is potentially problematic and may hinder critical thinking. For example, Rodriguez et al. (2016) found that people were less likely to notice flawed conclusions when anecdotes were presented in media descriptions of science studies. Furthermore, people often become inured to anecdotes after multiple exposures (Slovic et al., 2017). Importantly, anecdotes are also not effective in all situations (Zebregs et al., 2015). A meta-analysis by Zebregs et al. (2015) found that statistical data were generally more influential on attitudes and beliefs than anecdotes, while anecdotes were more effective at changing behavioral intentions. For example, anecdotes were less likely than statistical data to alter readers' attitudes toward climate change, seat belt use, or exercise. In contrast, anecdotes had a larger influence than statistical data on behavioral intentions with respect to risky sexual behavior, exercising, and tanning bed use. A more recent and somewhat broader meta-analysis suggests that anecdotes can also be more persuasive than statistical data when the risk consequences are high, and when they relate to personal decisions rather than decisions about others. Interestingly, one study found that individuals reported preferring statistical data to anecdotes for making decisions (Freling et al., 2020). Thus, although anecdotes may be able to increase risk concerns (at least when risks are severe and personal), they are not a panacea for communicating risks to the public. These drawbacks call for a method to convey data effectively without relying on anecdotes. One possibility is the use of data visualizations depicting risk.
Visualizing risks
Static visualizations
One approach to help individuals understand risk and frequency data, especially individuals with low numeracy, is to present icon arrays or similar visualizations (Galesic & Garcia-Retamero, 2011). Often, pairs of icon arrays are used to present relative and absolute risks. Icon arrays are especially helpful for communicating just how small risks might be. For example, a recent study showed that presenting a very small risk (i.e., the risk of serious side effects from the COVID-19 vaccine) as an icon array (one in 1 million dots) was effective in communicating the rarity of a ~0.000001% risk (Fansher et al., 2022b). This representation led to much lower concerns about COVID-19 vaccines and increased positive attitudes toward vaccination.

While prior work demonstrates that visualizations like icon arrays are effective at communicating risks, icon arrays are still static visualizations, meaning that they provide no mechanistic explanation for how the larger-than-expected (or smaller-than-expected) risk calculations are derived; they merely present the final risk estimate. Therefore, they may not be as compelling to the viewer, and may be less likely to increase trust in data when compared to visualizations that include information about how the risk estimate was derived.
Interactive simulations
One way to provide people with a richer understanding of data is to provide them with interactive simulations that demonstrate how data change under different scenarios. Interactive simulations change how people view data, especially data about different potential future outcomes. Rather than thinking of outcomes as fixed, viewers recognize that science models are tools for examining the consequences of different changes (Herring et al., 2017; Sterman, 2011). For example, Herring et al. (2017) gave participants simulations that showed how different emissions scenarios led to different magnitudes of climate change. They found that using those simulations increased people's concerns about climate change. Similarly, Witt et al. (2022) showed that interacting with simulations explaining the exponential growth of disease spread as well as the impact of various public health policies increased people's understanding of and trust in social distancing recommendations.
Although interactive simulations are often effective, they have several potential limitations. First, interactive simulations require more time and resources to create than static visualizations. Second, individuals must be motivated to interact with simulations and manipulate relevant variables. If too little or too much guidance is provided for exploring a simulation, individuals may simply not engage (Adams et al., 2015). Third, individuals can engage in random or mindless interactions rather than systematically exploring the effects of different variables (Liew et al., 2014), and such visualizations may contain elements that are distracting (Hegarty, 2004; Mayer, 2005). Finally, simulations require individuals to understand the mapping between different visualizations, which is often challenging (Magana & Silva Coutinho, 2017).
Narrative visualizations
The two types of visualizations mentioned above fall at the extreme ends of a spectrum. Static visualizations communicate efficiently but offer little mechanistic explanation, while simulations offer detailed mechanistic information but require more of the viewer's time and the designer's careful guidance. In contrast, narrative visualizations (Segel & Heer, 2010) strike a balance between these two extremes. These visualizations use step-by-step explanations to convey details (like simulations) but constrain the viewer's inquiry to a predefined set of insights (like static visualizations). Visualizations that meet this definition already appear in popular publications such as the New York Times (Buckley et al., 2022; Byrd et al., 2022) and Reuters (Cage, 2021; Dutta et al., 2019; Levine et al., 2021), but to our knowledge they have not been identified as a specific group of visualizations or studied specifically in the context of risk communication. Okan et al. (2015) examined different factors that may improve the efficacy of icon arrays for communicating risk. They found evidence that showing data in the form of icon arrays alongside explanatory labels increased risk understanding, particularly in participants with low graph literacy, suggesting that narrative visualizations may be an effective tool for presenting risk information.
Narrative visualizations have been widely used by visualization designers to ensure readers attend to key data-related facts (Bach et al., 2018; Hullman & Diakopoulos, 2011; Hullman et al., 2013; Segel & Heer, 2010). Typically, these visualizations incorporate external guidance such as sequential panels (e.g., the martini glass approach) and annotations to draw readers' attention through the data in the appropriate order. In these approaches, the goal is to control how readers comprehend the visualization by ensuring coverage of key facts and associated narrative context, but not necessarily to increase understanding of the data-generating process (i.e., how the facts occurred). For instance, Lee et al. (2021) highlighted the ways in which coronavirus skeptics effectively used counter-visualizations to promote their anti-mask stances. Reinholtz et al. (2021) showed that participants presented with inflow visualizations (new coronavirus cases each day) judged coronavirus risk as lower compared to participants presented with stock visualizations (cumulative number of coronavirus cases). The subset of narrative visualizations that focus on explaining the data-generating process might be especially appealing for complex and empirically driven communication challenges like cumulative risk.
Trust anddata visualization
The literature on trust in data visualization has proliferated in recent years, likely due to a need for improved scientific communication in an era where misinformation threatens public health (Cook et al., 2015; Roozenbeek et al., 2020) and people are increasingly skeptical toward science (Rutjens et al., 2018). While data visualizations have been shown to be effective at communicating risk information (Lipkus & Hollands, 1999), especially in the context of health risks (Garcia-Retamero et al., 2012), researchers suggest that trust in the presented data is necessary for viewers to update their beliefs and behaviors accordingly (Garcia-Retamero & Cokely, 2017; Mayr et al., 2019). Researchers have examined how the features of visualizations influence trust in various contexts, such as viewing election forecast visualizations (Yang et al., 2023), and the role of visualization features such as processing fluency (Elhamdadi et al., 2022a), title alignment (Kong et al., 2019), and the communication of uncertainty (Gustafson & Rice, 2020; Hullman, 2019; Kerr et al., 2023; van der Bles et al., 2020; Yang et al., 2023).
There are many challenges with measuring and defining trust in the context of data visualization (Elhamdadi et al., 2022b, 2023). In the current study, we adopt the general definition of trust in visualization from Mayr et al. (2019, p. 25), in that "trust is the user's implicit or explicit tendency to rely on a visualization and to build on the information displayed." Since the time of this study, other authors have presented multidimensional frameworks describing the various facets of trust in visualization (e.g., Elhamdadi et al., 2023; Pandey et al., 2023). For example, Pandey et al. (2023) described trust in visualization as being based on data credibility, clarity, reliability, familiarity, and confidence. Trust in visualization is also commonly framed as a type of interpersonal trust between the trustee (the visualization) and the trustor (the user) (Kelton et al., 2008; Lewicki et al., 2006; Zhao, 2017). Wang et al. (2014) described trustworthiness as referring to the honesty and believability of the source, with people being more likely to adopt suggestions from trustworthy sources and be influenced toward positive attitude change.
Indeed, in Garcia-Retamero and Cokely's (2017) review on the effect of data visualization on risk literacy, the authors state that "Well-designed visual aids robustly improve risk understanding by encouraging more thorough deliberation, facilitating self-assessment, and reducing biased risk representations, which in turn benefit attitudes, behavioral intentions, and trust, leading to healthier decisions and more positive health outcomes" (p. 622). There is a plethora of research suggesting that visual aids can lead to improved risk understanding (e.g., Garcia-Retamero et al., 2012; Garcia-Retamero & Cokely, 2017; Lipkus & Hollands, 1999; Zipkin et al., 2014), especially for individuals with low numeracy (Garcia-Retamero & Cokely, 2014). Some researchers suggest that data visualization can also lead to increased trust in data. Petrova et al. (2015) found that viewing visualizations of prostate cancer risk was associated with increased risk comprehension, which was in turn associated with a willingness to engage in shared decision-making with a physician, suggesting that viewing the risk visualization increased trust in the provider.
The current study
Anecdotes, static visualizations, interactive simulations, and narrative visualizations are all used to communicate data about risk in the popular press, but each contains different types and amounts of information, as discussed above. In the current study, we examine the potential benefits of narrative visualizations for communicating cumulative risk. Prior research on visual communication of risk offers guidance on increasing concern about underestimated risks, but it does not address the communication of cumulative risk. Rather, much of the prior work focuses on helping individuals comprehend the magnitudes of individual risks. For example, in public health contexts perceived risk is a necessary precondition for people to change their behavior (van der Pligt, 1996).

In this work, we focus on narrative visualizations because they convey information about data lineage, which is typically not conveyed by other commonly used mediums like static visualizations and anecdotes. We argue that, because they communicate information about the data-generating process, narrative visualizations could be powerful tools for scientific communication, particularly in risk contexts where people may be harmed if they make decisions based on inaccurate or incomplete information. We used the COVID-19 pandemic to empirically evaluate the effectiveness of narrative visualizations for communicating accumulating, quantitative risk information.
Study 1 compared the effectiveness of narrative visualizations with other common risk visualizations for communicating the cumulative risk of COVID infection at a Thanksgiving dinner. The influence of communicated data on people's attitudes and decision-making can also depend on people's trust in the data, calling for more focused interdisciplinary research on the relationship between trust and data visualization (Borgo & Edwards, 2020; Mayr et al., 2019; Park & Gil-Garcia, 2022). In a follow-up study, we investigated the mechanism behind the narrative visualization's effectiveness and found that narrative visualizations increased trust in the data by increasing feelings of understanding, while anecdotes did not significantly affect understanding.
Study 1
The goal of Study 1 was to use the context of the COVID-19 pandemic to test the effectiveness of narrative visualizations for communicating accumulating, quantitative risk information. In November 2020, just a few days prior to the US Thanksgiving holiday, we tested how different data presentation methods (i.e., anecdotes, static visualizations, and narrative visualizations), differing in their amount of information about the data-generating process and appeal to emotion, would differentially impact people's COVID-19-related plans and concerns during the Thanksgiving season. The materials for all the conditions were designed to closely mimic COVID visualizations found in the popular press, while prioritizing legibility of the text and graphs. We tested participants immediately before Thanksgiving and in a follow-up study a few weeks after to confirm whether the presentation in fact led to any changes in plans. The complete materials, code, and data are available at https://osf.io/k43ev/?view_only=67cc6401b06946889ac499fc19afec2f. For the exact wording of all the survey items, please see the Supplementary Materials.
Methods
Participants
Complete data were collected from 1592 US participants (770 female; age M(SD) = 33.54 (11.96) years) recruited from Prolific on November 25, 2020, a day before Thanksgiving, to complete session 1. Participants received an average pay of $10.30/hr (USD), and the median completion time for the study was 4 min and 40 s. A total of 1276 participants (622 female; age M(SD) = 34.02 (12.10) years) returned for the second part of this study (session 2), conducted from December 22, 2020 to January 1, 2021. Participants received an average pay of $9.29/hr (USD), and the median completion time for the study was 5 min and 10 s. All procedures were determined to be exempt by the University of Michigan IRB.
Design
Participants were randomly assigned to one of four conditions where they viewed either a static visualization (Static Condition, n = 398), the same static visualization combined with an anecdote (Static + Anecdote Condition, n = 399), an anecdote presented without a visualization (Anecdote Condition, n = 398), or a narrative visualization (Narrative Visualization Condition, n = 397).
Materials
Pre-Intervention Materials. Participants were asked to indicate who they planned to spend Thanksgiving with (alone/with household, extended family, friends/neighbors, strangers). To measure concerns about COVID-19, participants responded to two questions:

1. How concerned are you about getting COVID-19/Coronavirus at Thanksgiving?
2. How concerned are you that someone in your family will get COVID-19 at Thanksgiving?

Participants rated their level of concern for each of these questions on a scale of 0 (not concerned at all) to 100 (extremely concerned) using slider scales (anchored at 50). To measure participants' perceived risk of dining with an infected individual at Thanksgiving dinner, we asked: "What is the risk that at least one person at a Thanksgiving table with 10 people has COVID-19?", which they responded to on a scale of 0–100% with a slider scale.
Intervention Materials. Narrative Visualization Condition. Our narrative visualization intervention presented participants with a series of icon arrays illustrating the spread of COVID in a fictional community during Thanksgiving. The narrative visualization provided a step-by-step explanation for how there is a 40% risk of COVID transmission at a Thanksgiving dinner of 10 people if there was a 4% infection rate in the fictional community (Fig. 1). The narrative visualization materials were specifically inspired by the FiveThirtyEight article, "Why Even A Small Thanksgiving Is Dangerous" (Koerth & Elena, 2020).
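To make the accumulation concrete, the probability that at least one guest at the table is infected can be sketched with the textbook independence formula below. This is an illustration only, not the article's exact model: the function name and assumptions are ours, and this simple model yields roughly 34% for a 4% infection rate and 10 guests, so the 40% figure in the FiveThirtyEight-inspired materials presumably reflects additional modeling assumptions.

```python
def at_least_one_infected(prevalence: float, n_guests: int) -> float:
    """Probability that at least one of n_guests is infected, assuming
    each guest is an independent draw from a community with the given
    infection rate (a simplifying assumption for illustration only)."""
    return 1 - (1 - prevalence) ** n_guests

# 4% community infection rate, 10 guests at the table.
print(round(at_least_one_infected(0.04, 10), 3))  # ~0.335
```

The key intuition the narrative visualization walks through is exactly this: even when each individual guest is unlikely to be infected, the chance that someone at the table is infected grows quickly with the number of guests.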
Static Visualization Condition. The Static Visualization Condition served as a baseline to compare with the other conditions. This condition was not intended to evoke an emotional reaction like an anecdote and did not provide information about how the data were calculated like the narrative visualization. Participants in this condition viewed the last page of the narrative visualization (see Fig. 1, page 7): a bar graph showing that there would be a 40% risk of transmission at a Thanksgiving dinner with 10 people in this scenario. Participants were not provided with a step-by-step explanation of how that number was calculated.
Anecdote Condition. Participants in the Anecdote Condition read an anecdote describing an individual's personal experience with COVID-19 after contracting it at Canadian Thanksgiving (which occurs prior to US Thanksgiving). The anecdote was written by conducting an informal survey of social media posts describing people's experiences with COVID-19. The details from multiple posts were merged with the goal of creating a cohesive story describing a serious case of COVID-19 and evoking an emotional response (see Fig. 2 for the anecdote).
Static + Anecdote Condition. The Static + Anecdote Condition (see Fig. 3) presented the same graph as the static intervention, but included a text box containing one representative paragraph from the Anecdote Condition's materials (Fig. 2). We included the Anecdote and Static + Anecdote Conditions to see how narrative visualizations compared to typically successful interventions in the literature on medical risk assessment (i.e., using emotion-laden content, rather than explaining the data or showing data without information on how they were generated) (Fig. 4).
Post-Intervention Survey. After viewing the intervention materials, participants completed a post-intervention survey which again assessed their attitudes and concerns regarding COVID-19. They answered the same two concern items and the perceived risk item as in the pre-intervention survey. They were also asked the question: "How do you feel about your current plans?", to which they responded with a slider scale from 0 (not concerned at all) to 100 (extremely concerned). While change in concern, change in current plans, and change in perceived risk are the main variables of interest in our analysis of the Session 1 data, participants also answered demographic questions and disclosed their plans for the December/January holidays. All items are available in the Supplementary Materials.
Post-anksgiving Follow-up Survey. To assess the
longevity of any changes in concern, and to determine
whether the narrative visualization affected participants’
actual behavior (i.e., holiday plans), we followed up with
participants after anksgiving. For all items, please see
the Supplementary Materials.
Participants were first asked whether they planned to
attend or host/have attended or hosted any in-person
gatherings with people outside of their household for
the December/January holidays (Yes, No, Unsure). We
also asked whether COVID had changed their holiday
plans (No change; spending the holidays with fewer peo
-
ple than planned; spending the holidays with more peo-
ple than planned). We again assessed perceived risk of
COVID with the item: "Based on the previous Survey of Holiday Plans, what is the risk that at least one person at a Thanksgiving table with 10 people might have COVID-19?", to which participants responded with a slider scale from 0 to 100%.

Fig. 1 Narrative visualization intervention materials for Study 1. The page numbers were added to the figure to show the order in which the materials were presented to participants
To see whether intervention condition influenced participants' Thanksgiving activities, we asked them "Did you make any last-minute changes to your Thanksgiving plans?" (yes/no) followed by "Did your change of plans increase or decrease the number of people you interacted with during Thanksgiving?" (increased, decreased, neither increased nor decreased). We also assessed COVID concern with two items:

1. How concerned are you about getting COVID-19/Coronavirus via social gatherings in December? (slider scale of 0 (not concerned at all) to 100 (extremely concerned))
2. How concerned are you that someone in your family will get COVID-19 via social gatherings in December? (slider scale of 0 (not concerned at all) to 100 (extremely concerned))

Fig. 2 Anecdote intervention materials. Note. Participants read an anecdote from someone who had contracted COVID at Canadian Thanksgiving. The excerpt shown is representative; the full text is one page in length and is available in the Supplementary Materials

Fig. 3 Static + Anecdote Condition materials for Study 1. Note. Participants in the Static Condition viewed the bar graph without the anecdote

Fig. 4 Study 1 task flow
Procedure
Prolific workers were invited to participate in our initial survey, conducted before American Thanksgiving on November 25, 2020. After providing informed consent, participants completed the pre-intervention survey questions and were randomly assigned to view the materials associated with one of the four experimental conditions (Static, Static + Anecdote, Narrative Visualization, or Anecdote). They then completed the post-intervention survey questions, followed by the Subjective Numeracy Scale (SNS-3; McNaughton et al., 2015) and demographic questions. A few weeks later, participants from the first part of the study were invited to complete the Post-Thanksgiving survey. This survey was available to participants from December 22, 2020 to January 1, 2021. Participants again provided informed consent and answered the questions described above. The exact wording and ordering of all the questions from Study 1 are available in the Supplementary Materials.
Results
The results presented are guided by our main questions of interest:
1. Are narrative visualizations more effective than static
graphs at increasing concern and perceived risk?
2. How effective are narrative visualizations relative to
anecdotes?
3. Does adding data to the anecdote condition
(Static + Anecdote Condition) make the intervention
more effective than just viewing one or the other?
Given that we present the results of 20 various tests, we used a Bonferroni correction to reduce the likelihood of Type I error. Thus, in the following analysis, we decreased our significance level to p < 0.003 (α = .05/20 = 0.0025).
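The corrected threshold is simply the family-wise alpha divided by the number of tests:

```python
FAMILYWISE_ALPHA = 0.05
N_TESTS = 20

# Bonferroni correction: each individual test must clear alpha / m.
per_test_alpha = FAMILYWISE_ALPHA / N_TESTS
print(per_test_alpha)  # 0.0025
```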
Data exclusion
Our analysis focused on participants who indicated before the intervention that they planned to have a meal with people outside of their home (n = 532), as these were the participants who were not taking proper precautions. It can be assumed that they perceived a lower risk of contracting COVID-19 than was appropriate. Please see Table 1 for the demographic characteristics and sample sizes for the participants from each session after applying our exclusion criteria.
Pre-Thanksgiving data analysis
Change in Concern. We calculated pre- and post-inter-
vention concern scores for each participant by averaging
their responses to the two concern questions in each of
the pre- and post-intervention surveys (i.e., How concerned are you about getting COVID at Thanksgiving? How concerned are you that someone in your family will get COVID at Thanksgiving?). Note that the responses analyzed here were only from people who planned to gather with other people for the 2020 Thanksgiving holiday, suggesting that their concern was lower than recommended by public health authorities at the time. Therefore, we treat an increase in concern as a positive outcome in this study. That said, it is worth noting that
an increase in concern is not always a desirable outcome.
More generally, we hope that participants understood the
information presented and updated their beliefs based on
that information.
To compare post-intervention concern across intervention conditions, we ran an ANCOVA with condition as the predictor and pre-intervention concern as a covariate. The Static Condition served as the reference group for the condition comparisons (see Fig. 5a and Table 2). We found evidence that the Static Condition reported lower concern post-intervention than the Anecdote Condition (p < 0.001), and marginal evidence that the Static Condition reported lower concern post-intervention than both the Static + Anecdote (p = 0.008) and Narrative Visualization (p = 0.009) Conditions. The distributions of pre-intervention concern, post-intervention concern, and change in concern are available in Fig. 6. We would like to note that many of the participants did not update their concern score from pre- to post-intervention (see Exploratory Analyses for more detail).
Next, we ran a series of one-sample t-tests to determine if participants' change in concern (posttest concern score minus pretest concern score) was significantly different
Table 1 Sample sizes and demographic features of samples included in the analysis

| Session | Anecdote | Static + Anecdote | Static | Narrative Visualization | Total N | Age M(SD) | Gender |
|---|---|---|---|---|---|---|---|
| Pre-Thanksgiving (Session 1) | 135 | 150 | 131 | 116 | 532 | 33.91 (11.24) | 242 F |
| Post-Thanksgiving (Session 2) | 104 | 108 | 104 | 91 | 407 | 34.02 (11.33) | 201 F |
from 0 in each condition. All model output is available in Table 2. We found that participants in the Static Condition were less concerned after viewing the intervention materials. This decrease is reminiscent of a "backfire effect" (Lewandowsky et al., 2012). Past research has shown that attempts to debunk misinformation often lead to the denial of that information and further polarize opinion (Cook & Lewandowsky, 2016). When people do not have access to information about the data-generating process, they may (understandably) find it difficult to understand the data, which may ultimately lead to reduced concern. However, it is worth noting that there is currently debate as to whether such backfire effects are robust (see Swire-Thompson et al., 2020 for a review).
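The one-sample t statistic underlying these tests is simply the mean change score divided by its standard error. A minimal sketch with hypothetical change scores (the p-value would come from the t distribution with n - 1 degrees of freedom):

```python
import statistics

# One-sample t statistic testing the mean change (post - pre) against zero.
# The change scores below are hypothetical, not the study's data.
def one_sample_t(changes):
    n = len(changes)
    mean_change = statistics.fmean(changes)
    sd = statistics.stdev(changes)  # sample standard deviation (n - 1 denominator)
    return mean_change / (sd / n ** 0.5)

# Mostly negative changes, mimicking a decrease in concern:
t_stat = one_sample_t([-5, -3, 0, -8, -2, -4])
```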
Using persons as effect sizes to achieve converging evidence (Grice et al., 2020), we examined the proportion of participants per condition whose concern increased after viewing the intervention materials (Fig. 5b). The Static Condition was least effective at increasing people's COVID-19 concern. The proportion of people who increased their concern was similar across the other conditions, with the Anecdote Condition having the largest proportion of participants whose concern increased.
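The persons-as-effect-sizes approach simply counts how many participants moved in the predicted direction. A minimal sketch with hypothetical scores:

```python
# Proportion of participants whose score increased from pre- to post-intervention
# ("persons as effect sizes"). Scores here are hypothetical 0-100 concern ratings.
def proportion_increased(pre_scores, post_scores):
    increased = sum(post > pre for pre, post in zip(pre_scores, post_scores))
    return increased / len(pre_scores)

pre = [40, 55, 60, 20, 75]
post = [50, 55, 70, 15, 80]
share = proportion_increased(pre, post)  # 3 of 5 participants increased
```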
Change in Perceived Risk. It is important to note that all
participants were given explicit instructions on the risk
Fig. 5 Study 1 Change in Concern and Perceived Risk by Condition. Note. Figure 5a illustrates mean and standard error change in concern by condition (post-concern minus pre-concern). A change of zero indicates no change in concern; positive scores indicate increased concern, and negative scores indicate decreased concern. Figure 5c illustrates mean and standard error change in perceived risk (post-risk minus pre-risk). Figures 5a and 5c are not on the same scale. Figures 5b and 5d illustrate the proportion of participants whose concern or perceived risk increased after viewing their respective intervention
Table 2 Change in Concern and Perceived Risk from Pre- to Post-Intervention in Session 1

*indicates significance of p < 0.003

ANCOVA results, Session 1 (Post-Concern ~ Condition + Pre-Concern)

| Coefficients | Estimate | t value | p |
|---|---|---|---|
| Intercept | 0.73 | 0.41 | 0.68 |
| Static \| Static + Anecdote | 5.26 | 2.67 | 0.008 |
| Static \| Narrative Visualization | 5.50 | 2.61 | 0.009 |
| Static \| Anecdote | 8.28 | 4.09 | < 0.001* |
| Pre-intervention concern | 0.87 | 36.54 | < 0.001* |

One-sample t tests, change in concern versus 0

| Condition | df | t | p | 95% CI |
|---|---|---|---|---|
| Static | 130 | −3.16 | 0.002* | [−7.74, −1.78] |
| Anecdote | 134 | 2.46 | 0.02 | [0.79, 7.21] |
| Static + Anecdote | 149 | 0.52 | 0.60 | [−1.75, 2.99] |
| Narrative Visualization | 115 | 0.73 | 0.47 | [−1.96, 4.27] |

ANCOVA results, Session 1 (Post-Risk ~ Condition + Pre-Risk)

| Coefficients | Estimate | t value | p |
|---|---|---|---|
| Intercept | 25.90 | 14.18 | < 0.001 |
| Static \| Static + Anecdote | −2.80 | −1.29 | 0.20 |
| Static \| Narrative Visualization | 3.01 | 1.30 | 0.19 |
| Static \| Anecdote | 3.36 | 1.51 | 0.13 |
| Pre-Intervention Risk | 0.45 | 15.79 | < 0.001* |

One-sample t tests, change in risk versus 0

| Condition | df | t | p | 95% CI |
|---|---|---|---|---|
| Static | 130 | 3.30 | 0.001* | [3.23, 12.93] |
| Anecdote | 134 | 8.57 | < 0.001* | [10.25, 16.40] |
| Static + Anecdote | 149 | 3.30 | 0.001* | [2.58, 10.27] |
| Narrative Visualization | 115 | 6.19 | < 0.001* | [9.53, 18.49] |
Fig. 6 Distributions of Concern Scores
of dining with 10 people, regardless of condition. Thus, the risk question asked post-intervention ("What is the risk that at least one person at a Thanksgiving table with 10 people has COVID-19?") was a simple recall task. We compared post-intervention perceived risk between conditions using an ANCOVA, controlling for pre-intervention risk perception as a covariate. Pre-intervention perceived risk predicted post-intervention perceived risk (p < 0.001); however, there was no effect of condition on perceived risk (see Table 2).
One-sample t-tests with the test value as 0 (no change in perceived risk) indicated that perceived risk increased in all conditions after the intervention. Using persons as effect sizes to achieve converging evidence (Grice et al., 2020), we examined the proportion of participants per condition whose perceived risk increased after viewing the intervention materials (Fig. 5d). The Static Condition was the least effective at increasing perceived risk, although proportions were similar across conditions (see Table 2).
Concern about anksgiving Plans. After complet
-
ing the intervention, participants were also asked to rate
how concerned they were with their current anksgiv
-
ing plans on a scale of 0–100 (not concernedextremely
concerned). A one-way ANOVA did not find a difference
in concernbetween the four conditions (Mean Ratings:
Static Condition = 40.09, Narrative Visualization Condi
-
tion = 39.81, Static + Anecdote Condition = 42.66, Anec-
dote Condition = 43.91; see Table3).
Post-Thanksgiving follow-up survey
We next examined the data from the participants who returned for Session 2 and completed the Post-Thanksgiving survey. We examined data from participants who indicated that they planned to spend Thanksgiving with others during the Pre-Thanksgiving session (n = 407), as we considered those participants to be the least risk-averse.
Holiday Plans. Participants were asked to report information on how they spent their Thanksgiving holiday. We compared the proportion of participants who reported making last-minute changes to their Thanksgiving plans (n = 80) between conditions with a Chi-square test and found no differences between conditions (see Table 3). Participants who did change their Thanksgiving plans were asked whether the change increased, decreased, or did not impact the number of people at their gathering. The proportion of participants who spent the holiday with fewer people was compared between conditions, and there was no significant difference between conditions (see Table 3).
Participants were also asked whether they planned to spend the December/January holidays alone or with others. We excluded participants who responded "Unsure" (n = 22) and compared the proportion of participants who planned to spend the holidays with others between the four conditions using a Chi-square test. There was no significant difference between groups, with 54% of the Static + Anecdote Condition, 59% of the Narrative Visualization Condition, 62% of the Static Condition, and 51% of the Anecdote Condition indicating that they planned to spend the holidays with others (see Table 3).
Participants were also asked whether the COVID pandemic had changed their usual holiday plans. We compared the proportion of participants from each condition who indicated that they planned to spend the holidays with fewer people than usual using a Chi-square test. There was no significant difference between conditions, with 69% of the Static + Anecdote Condition, 80% of the Narrative Visualization Condition, 69% of the Static Condition, and 80% of the Anecdote Condition indicating that they planned to spend the holidays with fewer people than usual (see Table 3).
Table 3 Behavioral Intentions Reported in Sessions 1 and 2

Pre-Thanksgiving intentions: one-way ANOVA, comfort with current Thanksgiving plans

| | F(3, 528) | p |
|---|---|---|
| Overall Model | 0.7 | 0.55 |

Post-Thanksgiving intentions: Chi-square tests

| | χ² | p |
|---|---|---|
| Proportion of participants who changed Thanksgiving plans by condition | 3.13 | 0.37 |
| Proportion of participants who decreased Thanksgiving attendees by condition | 1.37 | 0.71 |
| Proportion of participants who planned to spend holidays with others by condition | 3.28 | 0.35 |
| Proportion of participants who planned to spend holidays with fewer people by condition | 6.07 | 0.10 |
Concern Ratings. We also asked participants how concerned they were about getting COVID or their family getting COVID at holiday gatherings. Their responses to these two items were averaged to create a composite concern score. We compared average concern between conditions with a one-way ANOVA and found no significant differences in concern between conditions (see Table 4). We did not use an ANCOVA as in Session 1, since we did not have participants rate their concern about the December holidays pre-intervention.
Change in Perceived Risk. Participants were asked to report the risk (0-100%) that at least one person at a table of 10 would have COVID at a Thanksgiving dinner. We investigated change in perceived risk using an ANCOVA model with condition as the predictor and pre-intervention perceived risk as the covariate. We found no significant differences between groups (see Table 4). One-sample t-tests compared the change in perceived risk from pre-intervention to Session 2 to 0 for each condition, and we did not find evidence for significant increases in perceived risk for any of the conditions (see Table 4).
Exploratory analyses
We ran a series of exploratory analyses to investigate (1) the factors leading to participants' decisions and concern relating to real-world plans, and (2) the factors leading participants to change their concern/perceived risk after viewing the intervention materials. Since these analyses are exploratory, we decided to be more lenient with our interpretation of significance, setting our alpha level to p < 0.05 (Fig. 7).
Exploratory Mediation Models. As reported above, we did not find an effect of condition on participants' real-world behavioral intentions. Thus, we explored the possibility that post-intervention concern mediated the relationship between condition and real-world behavioral intentions and decision-making. We ran a series of mediation analyses with condition as the predictor, pre-intervention concern as a covariate, post-intervention concern as the mediator, and real-world decisions/concern about plans as the outcome variables. For each model, only two conditions were included, with the Static Condition as the reference group, considering that the Static Condition showed a backfire effect in Session 1. Using the Static Condition as the reference condition also allowed us to investigate the impact of adding an anecdote (Static + Anecdote) or an explanation of the data (Narrative Visualization) to the data on concern about plans/behavioral intentions. The specific outcome variables used were: (1) concern about current Thanksgiving plans (collected in Session 1), (2) whether participants actually changed their Thanksgiving plans after the intervention (collected in Session 2), and (3) whether participants planned to spend the December holidays with others (collected in Session 2). All of the mediation models were run using the PROCESS macro in R (Hayes, 2017), with the Model 4 specification. We did not find any evidence that post-intervention concern mediated the relationship between condition and whether participants actually changed their Thanksgiving plans, nor whether participants planned to spend the December holidays with others. However, we found evidence for a
Table 4 Concern and Change in Perceived Risk in Session 2

*indicates significance of p < .003

One-way ANOVA: concern by condition

| | F(3, 403) | p |
|---|---|---|
| Overall Model | 1.08 | 0.36 |

ANCOVA results, Session 2 (Post-Risk ~ Condition + Pre-Risk)

| Coefficients | Estimate | t value | p |
|---|---|---|---|
| Intercept | 28.76 | 10.74 | < 0.001 |
| Static \| Static + Anecdote | 1.43 | 0.44 | 0.66 |
| Static \| Narrative Visualization | −0.31 | −0.09 | 0.93 |
| Static \| Anecdote | 0.65 | 0.20 | 0.84 |
| Pre-Intervention Risk | 0.17 | 3.88 | < 0.001* |

One-sample t tests, change in risk versus 0

| Condition | df | t | p | 95% CI |
|---|---|---|---|---|
| Static | 103 | 1.29 | 0.20 | [−2.18, 10.30] |
| Anecdote | 103 | 1.49 | 0.14 | [−1.61, 11.36] |
| Static + Anecdote | 107 | 1.74 | 0.08 | [−0.77, 12.03] |
| Narrative Visualization | 90 | 1.99 | 0.05 | [0.02, 13.54] |
full mediation of post-intervention concern on the relationship between condition and participants' concern with their current Thanksgiving plans in Session 1. This evidence was present for all three models comparing the Static Condition to the rest of the intervention conditions (see Fig. 8). These findings suggest that relative to the Static Condition, the other interventions led to increased concern, which in turn increased participants' concerns about their Thanksgiving plans. Given that all participants planned to celebrate Thanksgiving with others, this suggests that participating in the three interventions impacted their risk perception of real-world decisions in a positive way relative to the Static Condition.
Predicting Increased Concern. As illustrated by Fig. 6, it was surprising to us that many participants' concerns did not increase after viewing their respective intervention. Thus, we ran an exploratory analysis examining the factors predicting whether one's concern increased after viewing an intervention. We ran a logistic regression with political partisanship (coded as right, left, or other, with right as the reference group), numeracy based on the Subjective Numeracy Scale, condition (with the Static Condition as the reference group), and pre-intervention
Fig. 7 Distributions of Perceived Risk
Fig. 8 Exploratory Mediation Models. Note: For each model, the Static Condition is used as the reference group for each Condition comparison. *p < .05, **p < .001
concern as predictors, and whether participants' concern increased (1) or stayed the same/decreased (0) as the outcome variable:

Increased Concern (0 or 1) ~ Condition + Partisanship + Numeracy + Pre-Intervention Concern
We found evidence that participants in all three conditions were more likely to increase their level of concern than the Static Condition, and that participants who identified with the political left (i.e., Democrat or Lean Democrat) were more likely to increase their level of concern than participants who identified with the political right (i.e., Republican or Lean Republican) (see Table 5). There was no effect of numeracy or pre-intervention concern on whether participants' concern increased.
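The reported analyses were run in R; purely as an illustrative sketch, a binary logistic model of this form can be fit by gradient descent on the per-sample error (the dummy-coded predictor and data below are hypothetical, not the study's):

```python
import math

# Illustrative logistic regression fit by per-sample gradient descent.
# A sketch only, not the paper's actual analysis pipeline.
def fit_logistic(X, y, lr=0.1, epochs=2000):
    n_features = len(X[0])
    weights = [0.0] * n_features
    bias = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = bias + sum(w * x for w, x in zip(weights, xi))
            p = 1.0 / (1.0 + math.exp(-z))  # predicted probability of increased concern
            error = p - yi
            bias -= lr * error
            weights = [w - lr * error * x for w, x in zip(weights, xi)]
    return bias, weights

# Hypothetical single dummy predictor (1 = Anecdote, 0 = Static)
# and a binary outcome (1 = concern increased).
bias, weights = fit_logistic([[0.0], [0.0], [1.0], [1.0], [0.0], [1.0]],
                             [0, 1, 1, 1, 0, 1])
```

A positive fitted weight on the dummy indicates higher odds of increased concern in the coded condition, mirroring how the table's positive estimates are read.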
Discussion
The goal of Study 1 was to examine whether narrative visualizations are an effective tool for communicating cumulative risk, as well as how other interventions, such as the use of anecdotes and static data visualizations, influenced perceptions of cumulative risk. We collected these data in a unique scenario where participants reasoned about the real-world risks associated with dining with others during the COVID pandemic in 2020. Our most notable finding was a data backfire effect in the Static Condition in Session 1, suggesting that viewing the cumulative risk data led to decreased concern about the risk of contracting COVID at Thanksgiving dinner. It is important to note that this lack of concern was not due to a misunderstanding of the risk information presented to participants. Indeed, participants across all conditions were presented with the same cumulative risk of contracting COVID from a 10-person dinner, and there were no significant differences in perceived risk post-intervention between conditions. This suggests that although participants in the Static Condition perceived risk similarly to the other conditions, they were simply less concerned about the risk.
Table 5 Factors associated with concern increase: logistic regression output

| | Estimate | Std. Error | z value | p |
|---|---|---|---|---|
| Intercept | −1.10 | 0.49 | −2.24 | 0.03 |
| Static + Anecdote versus Static | 0.60 | 0.25 | 2.40 | 0.02* |
| Narrative Visualization versus Static | 0.64 | 0.27 | 2.38 | 0.02* |
| Anecdote versus Static | 0.80 | 0.26 | 3.12 | 0.002* |
| Partisanship: Left versus Right | 0.53 | 0.23 | 2.30 | 0.02* |
| Partisanship: Other versus Right | 0.07 | 0.26 | 0.24 | 0.80 |
| Numeracy | 0.03 | 0.08 | 0.39 | 0.70 |
| Pre-intervention concern | −0.003 | 0.003 | −0.99 | 0.32 |
Four weeks after the intervention, we did not find statistically significant differences between conditions in terms of their behavioral intentions. A similar proportion of participants reported changing their Thanksgiving plans after the intervention across conditions, and the proportion of participants who planned to spend the holidays alone or with fewer people was also similar across conditions. We found little evidence that the effects of the intervention were long lasting, as concern ratings and change in perceived risk were similar across conditions at the Post-Thanksgiving survey stage. However, in an exploratory mediation analysis we found evidence that, relative to the Static Condition, participating in the Static + Anecdote, Narrative Visualization, and Anecdote Conditions was associated with greater general concern, which in turn was associated with increased concern about one's Thanksgiving plans to dine with others. This provides some evidence that participants connected the data presented to them to their real-world decisions and intentions.
Lastly, we found that a large number of participants did not report any increase in concern from pre- to post-intervention. In an exploratory analysis, we found evidence that participants who politically leaned to the left were more likely to increase their concern from pre- to post-intervention compared to those who leaned right. This suggests that left-leaning participants may have been more inclined to update their beliefs in the presence of new information.
The most notable finding from Study 1 was that viewing data by itself led to a backfire effect, whereby participants were less concerned about a risk after viewing a bar graph illustrating cumulative risk. However, viewing static data in conjunction with an anecdote prevented this data backfire effect, as did viewing a narrative visualization. In Study 2, we further explore why these interventions did not lead to a backfire effect by examining how the various interventions influenced trust in, and understanding of, the data.
Study 2
Study 1 showed that viewing a static visualization of cumulative risk was associated with increased risk perception but decreased concern from pre- to post-intervention. This backfire effect was not present in the other conditions, where participants viewed either an anecdote by itself, a static visualization with an anecdote, or a narrative visualization. This provides promising evidence that interventions can include data without leading to backfire effects on concern. It is also encouraging that the Narrative Visualization Condition did not lead to a data backfire effect even though this intervention did not rely on emotional appeal like the other conditions (Anecdote/Static + Anecdote Conditions), suggesting that narrative visualizations may be an effective tool for communicating risks. Why didn't the Narrative Visualization Condition lead to the data backfire effect that occurred in the Static Condition? It is possible that people who viewed the narrative visualization obtained a better understanding of and/or had more trust in the data because of the additional information provided by the walkthrough, compared to the static visualization. The aim of Study 2 was to replicate our findings from Study 1 while examining whether the additional information provided by the narrative visualization affected trust, subjective understanding, or both. We also examined how these variables were influenced by the presence of anecdotes, which do not include information about how the data were generated, but which also mitigated the backfire effect in Study 1. All data, materials, and code are available at https://osf.io/k43ev/?view_only=67cc6401b06946889ac499fc19afec2f.
Methods
Participants
We recruited 1592 US participants (806 female; age M(SD) = 44.67(16.04)) from Prolific between January 6, 2022 and February 5, 2022. Participants received an average pay of $17.04/hr (USD), and the median completion time for the study was 5 min and 36 s. All procedures were determined to be exempt by the University of Michigan IRB.
Design
As in Study 1, participants were randomly assigned to one of four conditions in which they viewed either a static visualization (Static Condition, n = 399), the same static visualization combined with an anecdote (Static + Anecdote Condition, n = 388), an anecdote presented without a visualization (Anecdote Condition, n = 404), or a narrative visualization (Narrative Visualization Condition, n = 401).
Materials
Pre-Intervention Materials. Data for Study 2 were collected in January and February of 2022, so we made several changes to the Study 1 materials to reflect developments that had occurred in the world (e.g., the development of COVID-19 vaccines) during the time between the two studies. To account for changes in the COVID-19 pandemic and access to vaccines, participants were asked to make judgments about their hypothetical Thanksgiving plans during a pandemic with a fictional disease for which there was not yet a vaccine (as there were no vaccines available for COVID-19 when the Study 1 data were collected). All items with exact wording are included in the Supplementary Materials. All participants were presented with this scenario:

Imagine the following scenario: a new respiratory disease emerged in your community. This new disease is highly contagious, and there is no vaccine available yet. The Fall & Winter holiday season is approaching, and public health officials recommend that people do not gather during the holidays, but your local government has not placed any restrictions on gathering.

Please answer some questions about what your Thanksgiving plans would have been this year if you were living through the above hypothetical scenario.
Next, participants were asked who they would have Thanksgiving dinner with in this hypothetical scenario (i.e., alone/with people who live with you, extended family, friends/neighbors, strangers). They were also asked to indicate how concerned they would be about contracting the disease at Thanksgiving dinner and how concerned they would be that someone else at their Thanksgiving dinner would contract the disease (slider scales from 0 (not concerned at all) to 100 (extremely concerned) for each item). To assess understanding of risk, participants were asked "If 4% of the population has the disease, what is the risk that at least one person at a Thanksgiving table with 10 people has the disease?", which they answered with a slider scale from 0 to 100%.
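Under the simplest assumption that each of the n attendees is independently infected with probability equal to the community prevalence p, the cumulative risk asked about here is 1 − (1 − p)^n. A minimal sketch (the ~40% figure in the study materials presumably reflects additional adjustments, e.g., for undetected infections, which we do not attempt to reproduce):

```python
# Cumulative risk that at least one of n attendees is infected,
# assuming each attendee is independently infected with probability p.
def at_least_one_infected(p, n):
    return 1 - (1 - p) ** n

# 4% prevalence, table of 10 people:
risk = at_least_one_infected(0.04, 10)  # roughly 1/3 under these simple assumptions
```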
Intervention Materials. We made several refinements to the Study 1 intervention materials for Study 2. In the Static and Static + Anecdote Conditions of Study 2, we presented the risk information as an icon array (Fig. 9), rather than the bar graph from Study 1 (Fig. 3). Icon arrays are thought to increase understanding of risk, particularly for individuals with low numeracy (Okan et al., 2015). We made this change to improve the robustness of the Static Condition as a baseline. In addition, the narrative visualization in Study 1 used seven consecutive displays (Fig. 1), while the static intervention used a single static visualization (Fig. 3). To reduce the difference in time on task between these two interventions, we shortened the narrative visualization to just three displays (Fig. 10). Finally, the anecdote intervention in Study 1 did not include the quantitative risk data presented in the other interventions. To reduce differences between interventions, we modified the final paragraph of the anecdote to include the critical quantitative conclusion below (bold text was added for Study 2):
I'm so scared, and words can't describe the guilt I feel. I know I'm just one person, but if you take anything away from this story please STAY HOME. The pandemic has been hard for everyone, and we all miss our families, but the risks can be much higher than we realize. I found out afterward that (according to the CDC) even though only about 4 out of 100 people are infected, there was about a 40% chance that someone at our dinner of 10 was infected. So please don't risk something like this happening to you or your family. Stay home not just to protect yourself, but to protect your loved ones too.

Fig. 9 Static + Anecdote Condition materials for Study 2. Note. The Static Condition materials contained the same icon array without the anecdote. The Anecdote Condition materials contained an expanded version of the anecdote with no visualized data (see Supplementary Materials)

Fig. 10 Study 2 Narrative Visualization Materials. Note. Participants viewed three consecutive pages which explain step-by-step how the risk of exposure to the disease is about 40% in the study scenario (i.e., a Thanksgiving dinner with 10 people taking place in a community where 4 out of 100 people are infected)
We also modified the anecdote to remove any language that referred to COVID specifically. The full anecdote used in Study 2 is available in the Supplementary Materials.
Post-Intervention Survey Materials. After viewing the intervention materials, we asked participants to rate their concern about contracting the disease at Thanksgiving dinner for themselves and for others on slider scales of 0 (not concerned at all) to 100 (extremely concerned). We also asked them how they felt about the hypothetical Thanksgiving plans they reported pre-intervention with a slider scale from 0 (not concerned at all) to 100 (extremely concerned).
To measure self-rated understanding, we asked participants to rate their level of agreement with the following statement from 0 to 100: "After viewing the data, I understand why a Thanksgiving dinner with 10 people has about a 40% chance of exposure to the disease." To measure trust, we asked them to rate their level of agreement on scales of 0-100 with the following items:

1. I feel like the data are intended to accurately portray the risks of the new disease.
2. I feel like the data do accurately portray the risks of the new disease.

Lastly, participants provided demographic information, including numeracy and political partisanship data as in Study 1.
Procedure
The procedure for Study 2 was the same as for Study 1, except that participants were not invited for a follow-up survey four weeks later.
Results
As in Study 1, given that we present the results of multiple tests, we used a Bonferroni correction to reduce the likelihood of Type I error. Therefore, in the following analyses comparing concern, trust, and understanding between conditions, we lowered our significance level to p < 0.007 to correct for our seven tests (α = .05/7 ≈ 0.007).
Exclusion criteria
The Post-Intervention Survey contained two attention check items:

1. According to the data at the end of the story, what is the approximate risk that you will be exposed to the disease if 4 out of 100 people in your community are infected and you have dinner with 10 people? (40%, 4%, 10%, or 14%)
2. In the hypothetical scenario you read at the beginning of the study, was a vaccine available for the disease? (Yes, one vaccine; Yes, multiple vaccines; or No)

Participants were excluded from the analysis if they gave an incorrect response on one or both attention check questions (n = 156). Of the participants who passed both attention check items, only 12% (n = 173) stated that they would eat dinner with people outside of their home, so we were unable to divide the data based on dinner intentions as we did for Study 1.
Post‑intervention concern
All model output is available in Table 6. To compare post-intervention concern across intervention conditions, we ran an ANCOVA with condition as the predictor, pre-intervention concern as the covariate, and post-intervention concern as the outcome variable of interest (see Fig. 11a). Unlike in Study 1, there were no significant differences between conditions.

Next, we ran a series of one-sample t-tests with the comparison value as 0 to determine if change in concern significantly increased or decreased in each condition. After correcting for multiple tests, we did not find evidence for a significant increase or decrease in participants' concern in any of the conditions. Using persons as effect sizes to achieve converging evidence (Grice et al., 2020), we examined the proportion of participants per condition whose concern increased after viewing the intervention materials (Fig. 11b). As in Study 1, the Static Condition was least effective at increasing people's concern about the disease. Distributions of pre-intervention, post-intervention, and change in concern scores are shown in Fig. 12.
The eect ofcondition ontrust andunderstanding
Our primary question in Study 2 was whether the nar-
rative visualization intervention and Static + Anecdote
Conditions prevented a data backfire effect because par
-
ticipants felt that they understood and trusted the data
more than in the Static Condition. We first tested our
hypothesis that viewing the narrative visualization would
improve understanding and trust compared to the Static
Condition.
Self-rated understanding was measured through responses to the item "After reading the story, I understand why a Thanksgiving dinner with 10 people has about a 40% chance of exposure to the disease." To examine the role of condition on self-rated understanding, we ran a one-way ANOVA with condition as the predictor and understanding as the outcome variable. The model was significant overall (see Fig. 13a; Table 7), and post hoc tests with Tukey HSD indicated that participants in the Narrative Visualization Condition reported greater understanding than those in the Static + Anecdote Condition, the Anecdote Condition, and the Static Condition.
Trust was calculated by averaging participants' responses to two items: (1) "I feel like the story is intended to accurately portray the risks of the new disease" and (2) "I feel like the story does accurately portray the risks of the new disease". We again ran a one-way ANOVA with condition as the predictor and trust as the outcome variable. The model was significant overall (see Fig. 13b; Table 7), and post hoc tests with Tukey HSD indicated that participants in the Narrative Visualization Condition reported greater trust than those in the Static Condition.
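As a sketch of the arithmetic underlying these one-way ANOVAs (illustrative data; the real analyses also include Tukey HSD post hoc comparisons):

```python
# One-way ANOVA: F = between-group mean square / within-group mean square.
# Group scores below are synthetic, standing in for self-rated understanding.

groups = {
    "Static":        [55, 60, 52, 58],
    "Narrative Viz": [70, 75, 72, 68],
    "Anecdote":      [57, 61, 59, 55],
}

grand = [x for g in groups.values() for x in g]
grand_mean = sum(grand) / len(grand)

ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups.values())
ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups.values())

df_between = len(groups) - 1
df_within = len(grand) - len(groups)
F = (ss_between / df_between) / (ss_within / df_within)
print(f"F({df_between}, {df_within}) = {F:.2f}")
```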
Mediation analysis
The primary goal of Study 2 was to investigate the possible mechanisms that allowed participants in the Narrative Visualization Condition to view the data without experiencing a data backfire effect (as seen in the Static Condition). As such, we investigated the impact of intervention condition on participants' self-reported understanding of the data and trust in the data.

Table 6 Change in Concern from Pre- to Post-Intervention in Study 2

ANCOVA Results—Study 2 (Post-Concern ~ Condition + Pre-Concern)

Coefficients                      Estimate   t value   p
Intercept                         11.99      7.68      < 0.001
Static | Static + Anecdote        3.34       2.29      0.02
Static | Narrative Visualization  2.35       1.63      0.10
Static | Anecdote                 0.92       0.63      0.53
Pre-intervention concern          0.80       45.53     < 0.001*

One-sample t tests—Change in concern versus 0

Condition                df    t       p      95% CI
Static                   356   −0.95   0.34   [−3.06, 1.06]
Anecdote                 354   −0.44   0.66   [−3.01, 1.91]
Static + Anecdote        350   2.26    0.02   [0.30, 4.31]
Narrative Visualization  372   1.20    0.23   [−0.73, 2.99]

*Indicates significance at p < .007

Fig. 11 Study 2 Changes in Concern and Perceived Risk by Condition. Note. Figure 11a illustrates condition means and standard errors. A change in concern of zero indicates no change; positive scores indicate increased concern, and negative scores indicate decreased concern. Figure 11b shows the proportion of participants whose concern increased in each condition
Mediation analysis—comparing narrative visualization and static conditions
We conducted a mediation analysis using the PROCESS macro in R (Hayes, 2017) to determine whether the impact of the narrative visualization on post-concern ratings was serially mediated by understanding and trust in the data (Model 6 in the PROCESS macro). We ran a mediation analysis with understanding and trust as mediators of the effect of the narrative visualization (IV) on post-intervention concern (DV) (Fig. 14). To isolate the specific effect of the narrative visualization, we compared the narrative visualization intervention to the static intervention. Pre-intervention concern was included as a covariate.
The Effect of Condition on Understanding and Trust. We ran a serial model under the assumption that understanding affects trust. The outcome is shown in Fig. 14. We found evidence for an independent effect of condition on understanding: participants in the Narrative Visualization Condition reported greater understanding than participants in the Static Condition (b = 12.12, p < 0.001, 95% CI = [8.00, 16.24]). There was also an independent effect of understanding on trust, such that greater understanding was associated with higher trust (b = 0.75, p < 0.001, 95% CI = [0.70, 0.79]). However, there was no significant independent effect of condition on trust itself (b = −0.65, p = 0.64, 95% CI = [−3.41, 2.10]).

Fig. 12 Distributions of concern scores

Fig. 13 Study 2 self-rated understanding and trust by condition. Note. Figures 13a–b illustrate means and standard errors for self-rated understanding and trust by condition
Predicting Post-Intervention Concern. There was an independent effect of trust in the data on concern about disease transmission post-intervention (b = 0.09, p = 0.01, 95% CI = [0.02, 0.17]). However, reported understanding did not independently predict post-intervention concern (b = −0.005, p = 0.90, 95% CI = [−0.08, 0.07]). There was also no evidence of a direct effect of condition on post-intervention concern (b = 1.57, p = 0.25, 95% CI = [−1.15, 4.30]).

Table 7 Trust and understanding by condition in Study 2

Study 2—Understanding

ANOVA—Understanding by Condition
                F(3, 1432)   p
Overall Model   21.16        < 0.001*

Tukey HSD Pairwise Comparisons
                                     M_diff    p_adj      95% CI
Narrative Visualization | Static     13.48     < 0.001*   [7.37, 19.59]
Anecdote | Static + Anecdote         −4.41     0.26       [−10.59, 1.77]
Anecdote | Narrative Visualization   −17.89    < 0.001*   [−23.98, −11.80]
Static | Static + Anecdote           1.08      0.97       [−5.09, 7.26]
Static | Narrative Visualization     −12.39    < 0.001*   [−18.48, −6.31]
Anecdote | Static                    −5.49     0.10       [−11.65, 0.67]

Study 2—Trust

ANOVA—Trust by Condition
                F(3, 1432)   p
Overall Model   6.23         < 0.001*

Tukey HSD Pairwise Comparisons
                                     M_diff   p_adj      95% CI
Narrative Visualization | Static     6.28     0.02       [0.59, 11.97]
Anecdote | Static + Anecdote         3.68     0.36       [−2.08, 9.43]
Anecdote | Narrative Visualization   −2.60    0.64       [−8.28, 3.07]
Static | Static + Anecdote           −2.53    0.67       [−8.28, 3.22]
Static | Narrative Visualization     −8.81    < 0.001*   [−14.47, −3.14]
Anecdote | Static                    6.20     0.03       [0.47, 11.94]

*Indicates significance at p < .007

Fig. 14 Study 2: Mediation model comparing the Narrative Visualization Condition to the Static Condition. Notes: *p < 0.05; **p < 0.001. Thick lines show the significant indirect pathway from condition to post-intervention concern via understanding and trust
Mediating the Relationship Between Condition and Concern. We did not find evidence for an indirect effect of condition on concern through understanding alone (indirect effect = −0.05, SE = 0.44, 95% CI = [−0.94, 0.82]) or trust alone (indirect effect = −0.06, SE = 0.15, 95% CI = [−0.38, 0.21]). However, the data suggested that condition had an indirect effect on concern through both self-rated understanding and trust (indirect effect = 0.85, SE = 0.37, 95% CI = [0.20, 1.64]). These findings suggest a full serial mediation of understanding and trust on the relationship between condition and post-intervention concern.
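In PROCESS Model 6, a serial indirect effect is the product of the path coefficients along the chain. Using the rounded coefficients reported above (condition → understanding, understanding → trust, trust → post-concern), the product-of-paths arithmetic is:

```python
# Serial indirect effect as a product of paths (PROCESS Model 6).
# Coefficients are the rounded values reported in the text above.

a1  = 12.12   # Narrative Visualization vs. Static -> understanding
d21 = 0.75    # understanding -> trust
b2  = 0.09    # trust -> post-intervention concern

serial_indirect = a1 * d21 * b2
print(f"serial indirect effect: {serial_indirect:.2f}")
```

The result is close to the reported bootstrap estimate of 0.85; the small gap reflects rounding of the printed coefficients and the fact that PROCESS estimates the indirect effect by bootstrapping rather than by this simple product.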
Mediation analysis—comparing static + anecdote and static conditions
In Study 1, we found evidence of a data backfire effect, in which participants in the Static Condition reported feeling less concerned after seeing the data. Even though participants in the Static + Anecdote and Narrative Visualization Conditions also viewed data, they did not experience the same effect. Therefore, we thought it would be helpful to examine the mechanisms that prevented the data backfire effect in the Static + Anecdote Condition. We again conducted a mediation analysis to examine the effect of condition, understanding, and trust on post-intervention concern.
The Effect of Condition on Understanding and Trust. We did not find an independent effect of condition on reported understanding (b = −1.14, p = 0.65, 95% CI = [−5.95, 3.67]); however, participants in the Static + Anecdote Condition reported feeling higher levels of trust than participants in the Static Condition (b = 3.28, p = 0.02, 95% CI = [0.58, 5.97]). As in the previous analysis, there was an independent effect of understanding on trust; participants who reported greater understanding also reported greater trust in the data (b = 0.72, p < 0.001, 95% CI = [0.12, 0.22]).
Predicting Post-Intervention Concern. We found evidence for a direct effect of condition on post-intervention concern, such that participants in the Static + Anecdote Condition reported greater concern than those in the Static Condition (b = 3.15, p = 0.03, 95% CI = [0.40, 5.91]). There was also an independent effect of trust on concern, with participants reporting greater trust also reporting greater concern post-intervention (b = 0.08, p = 0.03, 95% CI = [0.007, 0.16]). There was no independent effect of reported understanding on post-intervention concern (b = 0.02, p = 0.63, 95% CI = [−0.05, 0.08]).
Mediating the Relationship Between Condition and Concern. We did not find evidence for an indirect effect of condition on concern through understanding alone (indirect effect = −0.02, SE = 0.11, 95% CI = [−0.27, 0.21]) or trust alone (indirect effect = 0.27, SE = 0.20, 95% CI = [−0.005, 0.74]). Unlike the previous analysis, we found no evidence of an indirect effect of condition on post-intervention concern through both understanding and trust (indirect effect = −0.07, SE = 0.17, 95% CI = [−0.45, 0.25]) (see Fig. 15).
Discussion
Unlike inStudy 1, we did not observe statistically sig-
nificant differences between intervention conditions in
changes in concern. Study 2 results likely differed from
Study 1 because of the differences in context between
the two studies: Study 1 assessed concern about real-
life plans during a time where COVID-19 was much
less understood, while Study 2 asked about concern for
hypothetical plans and was administered after the virus
was better understood and a vaccine was widely available.
Additionally, Study 2 includeddata from all the partici
-
pants, whereas Study 1 exclusively used data from par-
ticipants who were the least risk averse (indicated they
Fig. 15 Study 2: Mediation model comparing the Static + Anecdote Condition to the Static Condition. Notes: *p < 0.05; **p < 0.001
Page 22 of 27
Fansheretal. Cognitive Research: Principles and Implications (2025) 10:7
planned to spend anksgiving with people outside of
their household).
The findings from Study 2 provide a mechanistic explanation for why participants in the Narrative Visualization Condition did not experience the backfire effect observed in the Static Condition. Our data suggest that viewing a narrative visualization led to increased subjective understanding of the data and greater trust. Most importantly, a mediation analysis indicated that viewing the narrative visualization increased concern relative to viewing a static visualization, and that this effect was produced through increased understanding of and trust in the data. Thus, the model shows that the narrative visualization helped people feel more confident in their understanding of the risk, which, in turn, increased their trust in the information, ultimately resulting in an increase in overall concern.
General discussion
Helping the general public accurately reason about data, especially in contexts related to real-world decision-making, is an important goal of science communication. The current investigation builds on previous work examining how viewing quantitative data and reading affect-laden stories shape risk perception across various domains (Cutello et al., 2020; Fagerlin et al., 2005; Rakow et al., 2015; Tiede et al., 2022). In Study 1, we compared different methods for presenting cumulative risk data regarding the likelihood of contracting COVID during the 2020 holidays. We tested the effectiveness of narrative visualizations, a method of data communication that walks participants through how data are generated, as a scientific communication tool. We also examined the impact of static data visualizations and anecdotal evidence on risk perception and concern about the COVID pandemic. The context of the study allowed us to investigate how these interventions affected individuals with suboptimal risk perception or concern, in that participants included in our data analysis planned to dine with people outside of their household during the 2020 American Thanksgiving holiday. We were also able to examine the effects of viewing the intervention materials on perceptions of real-world decisions, given the longitudinal design of Study 1. A key finding from Study 1 was that participants who viewed a static visualization of cumulative risk experienced a backfire effect: even though their perceived risk of dining with individuals outside of their home increased from pre- to post-intervention, their concern about contracting COVID actually decreased. This backfire effect was not observed in the Static + Anecdote or the Narrative Visualization Conditions.
It is perhaps unsurprising that the presence of an anecdote in the Static + Anecdote Condition helped mitigate this backfire effect, given prior work demonstrating the potential for emotion-laden stories to increase perceived risk (Freling et al., 2020). Risk avoidance strategies frequently rely on providing people with alarming stories. For instance, messaging aimed at discouraging distracted driving or driving under the influence of drugs or alcohol relies on impressionable, emotional testimonials rather than quantitative data (Betsch et al., 2011; Janssen et al., 2013; Kim et al., 2017). Although anecdotes are often effective, they can be problematic. Typically, anecdotes rely on affect-based mechanisms that activate heuristic, rather than analytic, thinking (Rodriguez et al., 2016). Consequently, people tend to ignore quantitative data and instead make decisions based on testimonials or anecdotes, even when the data provided are more representative.
Narrative visualizations, in contrast, do not rely on affective responses and directly address the data at hand. In Study 1, we found that viewing a narrative visualization explaining how the accumulated risk data were generated prevented a backfire effect. This suggests that in real-world contexts where perceived risk is suboptimal, narrative visualizations can influence people's attitudes by communicating quantitative data rather than relying on stories or testimonials. Why was the narrative visualization effective? Our mediation analysis in Study 2 indicated that the mechanisms driving this effect were understanding and trust. A serial mediation analysis revealed a full serial mediation of understanding and trust on the relationship between condition and post-intervention concern. It is perhaps unsurprising that the narrative visualization increased perceived understanding and trust; the narrative visualization explains the data-generating process in detail, providing a causal explanation for the data. Research suggests that providing coherent explanatory frameworks is associated with higher judgements of information credibility (Sloman, 1994; Thagard, 2007). Causal explanations are also more effective in correcting misinformation than simply providing people with corrective information (Lammers et al., 2020; Lewandowsky et al., 2012; Nyhan & Reifler, 2015; Seifert, 2002). Understanding and trust did not mediate the relationship between condition and concern when comparing the Static + Anecdote and Static Conditions, suggesting that the Static + Anecdote Condition mitigated backfire through a mechanism other than increased trust and understanding.
Implications forscientic communication
Our findings provide evidence that risks can be communicated as quantitative data in a manner that still increases people's understanding of and trust in data. These findings suggest that narrative visualizations may be an effective tool for combating misinformation about data. Over- and underestimation of risks are common forms of misinformation. Underestimation of risk, as observed in the present studies, is particularly common, and it is difficult to convince individuals to follow public health guidelines when risks are underestimated. Overestimation of risks is also problematic. For example, vaccine-hesitant individuals overestimate the risks of vaccines, believing these risks to be more common and serious than they are in reality. Most prior work on correcting misinformation, however, focuses on correcting inaccurate facts (e.g., the belief that vaccines cause autism) rather than correcting data. In contexts such as vaccine hesitancy, there is some evidence that corrections can backfire (Nyhan & Reifler, 2015). In Study 1, we demonstrate that correcting quantitative misinformation, not just factual misinformation, can also lead to backfire when data are surprising. Furthermore, we show that causal explanations are effective not only for correcting misinformation about facts, but also misinformation about data.
Our results demonstrate that narrative visualizations can be more effective than static visualizations for communicating risk, particularly in situations like the COVID-19 pandemic. Beyond this scenario, we believe narrative visualizations have the potential to communicate complex, unintuitive, or surprising data in various other contexts. In particular, narrative visualizations may be useful in contexts involving accumulation (e.g., exposure to environmental toxins, or repeated engagement in risky behaviors) or exponential growth (e.g., compound interest in the context of financial literacy, or the spread of wildfires in the context of public safety), and in explaining outcomes related to algorithmic decision-making (e.g., recidivism prediction; O'Neil, 2017). By interleaving visualizations with explanations of the underlying data-generating processes, narrative visualizations can help readers build connections between the story and its rationale, consequently increasing their trust in the data.
However, trust is not always beneficial. For example, Padilla et al. (2022a, 2022b) found that viewing lower-complexity visualizations was associated with increased trust in the data; however, this led to poorer decision quality. Similarly, McGinnies and Ward (1980) found that people were more persuaded by a source's trustworthiness than by their expertise, and O'Brien et al. (2021) found that greater trust in science can make people vulnerable to believing pseudoscience. It is important to note that data visualization should not lead to blind trust. Instead, visualizations should optimize the presentation of data so that users can critically evaluate the information (i.e., users should "calibrate" their trust; Elhamdadi et al., 2022a, 2022b; Han & Schulz, 2020). This goal assumes that the designer wants the user to accurately understand the data rather than to misinform (e.g., see Ethical Interaction Theory; Feltz & Cokely, 2024, for further discussion).
Open questions
Our findings raise several questions for further exploration. First, we did not find an effect of condition on real-world decision-making and perceptions of decisions (i.e., whether one changed their Thanksgiving or December holiday plans in Study 1). In Study 1, we ran an exploratory analysis to investigate whether post-intervention concern mediated the relationship between condition and real-world outcomes. We found evidence that, compared to the Static Condition, participants in the three other intervention conditions (Static + Anecdote, Narrative Visualization, and Anecdote) reported increased concern, which was associated with increased concern about their own Thanksgiving plans. These findings suggest that such interventions may have the potential to influence real-world decision-making. Future research should examine the conditions under which risk communication methods are most likely to influence real-world behaviors and decisions.
In Study 1, we found that many participants' concerns about COVID did not change from pre- to post-intervention. Why this was the case is unclear. We present an exploratory analysis examining some of the factors that may have contributed to one's willingness to increase their concern post-intervention. We found evidence that participants who identified with a right-leaning political ideology were less likely to increase their concern than left-leaning participants. This suggests that right- and left-leaning individuals may process data differently from one another, and that researchers should consider how best to communicate risk to both groups. Political partisanship is just one factor that could influence a person's willingness to update their concern about a risk in the face of evidence. Future studies should investigate the factors that influence one's openness to updating their beliefs.
We would like to note that in the specific example used in this study, there was an optimal estimate of risk (i.e., 40%). However, the translation of that estimate into concern will vary by individual. For example, an extremely risk-averse person or someone with a pre-existing condition may perceive a risk of 40% as greater than another individual would. Whether one is overestimating or underestimating possible risks depends on many individual factors, as well as on the consequences of misestimating. In the specific example we present, we do not think that participants could have been overly concerned: people should be very concerned about a 40% chance of dining with an individual with COVID or a hypothetical illness. However, future work could consider how different types of data presentations may influence risk perception when the risk is small (such as vaccine side effects).
In Study 1, we found evidence that although participants in the Static Condition reported an increase in perceived risk of contracting COVID, their reported concern decreased. This suggests that participants remembered the risk-related information presented to them in the intervention (i.e., that the risk of contracting COVID at a Thanksgiving dinner with 10 people was 40%); however, they were less concerned about the risk. It is unclear why there was a disconnect between their risk perception and concern. Future research could use additional methods, such as qualitative free-response analysis, to explain this phenomenon.
Lastly, participants in the Static Condition in Study 1 demonstrated this backfire effect, but it was not observed in the second study. There were many important differences between Studies 1 and 2; namely, Study 1 was conducted in the context of understanding real-world data, while Study 2 was conducted in the context of reasoning about hypothetical data. Other notable differences include the fact that Study 1 presented data as a bar chart, while Study 2 presented it as an icon array, and that the Study 1 data analysis only included participants with low risk perception (i.e., those who planned to dine with others during the 2020 Thanksgiving holiday). It is worth investigating the conditions under which these backfire effects occur. We would also like to note that in Fig. 6 there is an outlier data point for the Static Condition (a decrease in concern of 100 points). We still observed the data backfire effect after excluding this individual's data.
Limitations
While our research showed an advantage for narrative visualizations over static visualizations, it is unclear whether this advantage would also be found in other scenarios and contexts. The context used here was one for which risk was (essentially) cumulative. Cumulative risk can be especially challenging to comprehend, in which case a visualization like the narrative visualization may have been particularly helpful. It is an open research question whether narrative visualizations would help illustrate risks of various magnitudes. Narrative visualizations could also be explored as a tool to help people understand both absolute and relative risk (such as the difference between 2% and 4%, which is a relative difference of 100%).
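The absolute-versus-relative distinction above can be made concrete in a few lines:

```python
# Absolute vs. relative risk for the 2% -> 4% example in the text.
baseline, treated = 0.02, 0.04
absolute_diff = treated - baseline                # 2 percentage points
relative_diff = (treated - baseline) / baseline   # a 100% relative increase
print(f"absolute: {absolute_diff:.0%} points, relative: {relative_diff:.0%}")
```

The same change can therefore sound small ("2 points") or large ("doubled"), which is exactly the framing problem a visualization must navigate.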
One limitation of our work was our method of measuring trust. Participants were given two Likert-scale items measuring trust in the visualization: one assessing the perceived intentions of the visualization designer and one assessing the perceived accuracy of the data ("I feel like the data are intended to accurately portray the risks of the new disease"/"I feel like the data do accurately portray the risks of the new disease"). While it is not uncommon to use such Likert-scale items in trust research (for a review, see Elhamdadi et al., 2023), other researchers argue for more holistic approaches that accurately capture the various dimensions of trust. For example, trust may be assessed with methods including quantitative surveys from the social sciences, trust games from behavioral economics, measures of belief updating, and perceptual measures (Elhamdadi et al., 2022b). Elhamdadi et al. (2023) presented an integrated framework for trust in visualization, offering suggestions on how researchers can best capture its various dimensions. The authors suggest that trust in visualization is contingent upon cognitive and affective factors including visualization clarity, usability, accuracy, aesthetics, and the extent to which the visualization is an accurate representation of the underlying data (benevolence/ethics). While our measures assessed two facets of trust, future research in this area should use a more comprehensive assessment of trust to better understand how narrative visualizations impact its different components. While not ideal, we do find some evidence for the reliability and validity of our measures: the two items had high reliability (α = 0.91), and our measure of trust related to understanding, visualization presence, and concern in the directions one would expect.
We found that trust in the visualization, via understanding, mediated the effect of the narrative visualization on post-test concern. However, we found partial mediation, not full mediation. This raises the open question of which other aspects of the narrative visualization contributed to its effectiveness. Answering this question would be useful for establishing design principles for narrative visualizations. For example, it could be useful to run a version of the study in which participants read about how the 40% risk was calculated but do not view the accompanying visualizations.
It is also possible that time on task influenced subsequent concern and risk perception. The current study did not collect timing data; however, the amount of time spent interacting with each intervention may have had an influence. Lastly, it is unclear whether viewing the narrative visualization actually improved understanding of the data-generating process. Future research should include objective measures of understanding to determine whether actual understanding maps onto perceived understanding.
Conclusion
The presented work investigated the promise of narrative visualizations for communicating data about cumulative risk. The narrative visualizations implemented in our intervention included icon arrays and accompanying text describing how one's risk of contracting a disease accumulates as the number of people one interacts with increases. Our results provide promising evidence that narrative visualizations are an effective tool for mitigating backfire effects by increasing understanding and subsequent trust in data. This work suggests that misinformation correction does not need to rely on emotion-laden content or interventions that affect risk perception through heuristic thinking; indeed, narrative visualizations may be an effective way to improve risk perception using quantitative data.
Supplementary Information
The online version contains supplementary material available at https://doi.org/10.1186/s41235-025-00613-w.
Supplementary Material 1
Acknowledgements
The authors would like to thank the National Science Foundation for funding
this research.
Significance Statement
Many people fail to understand that engaging in risky behaviors over time
increases the likelihood of experiencing such risk—termed cumulative risk.
Understanding accumulated risk is crucial for making informed real-life
decisions. For example, in the case of COVID, the likelihood of transmission
from interacting with a single person is small, but interacting with multiple
people increases the likelihood that one of them will have the disease and
transmit it to you. In this study, we examined how to best communicate accu-
mulated risk data with the public, with the goal of increasing concern about
risk as well as understanding and trust in data. We demonstrated that viewing
narrative visualizations—or data visualizations showing how risk accumulates
over time—increased concern about risk by increasing understanding and
trust in the data. This suggests that narrative visualizations should be used
when communicating information about cumulative risk.
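Assuming, purely for illustration, an independent per-guest infectiousness probability of about 5% (a figure we choose here only so the arithmetic lands near the 40% used in the studies), the accumulation described above works out as follows:

```python
# Cumulative risk: the chance that at least one of n independent exposures
# "hits" is 1 - (1 - p)**n. With p = 0.05 and n = 10 guests, this is ~40%,
# even though each individual exposure risk is small.

p, n = 0.05, 10
cumulative = 1 - (1 - p) ** n
print(f"chance at least one of {n} guests is infectious: {cumulative:.0%}")
```

This is the counterintuitive jump (small per-event risk, large accumulated risk) that the narrative visualizations were designed to walk readers through.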
Author contributions
M.F. contributed to manuscript preparation and data analysis. L.W. contrib-
uted to experimental design, stimulus creation, data collection, data analysis,
and writing and editing the manuscript. C.H. contributed to experimental
design, data analysis, and writing and editing the manuscript. H.S. and J.K.W.
contributed to experiment design, data analysis, and writing and editing the
manuscript. A.B. and P.S. contributed to experimental design and writing and
editing the manuscript. All authors read and approved the final manuscript.
Funding
This work was supported by National Science Foundation Award #2030059 (RAPID: COVID-19 Information Visualizations) awarded to P.S. and J.K.W.
Availability of data and materials
The datasets supporting the conclusions of this article are available in the OSF repository, https://osf.io/k43ev/?view_only=67cc6401b06946889ac499fc19afec2f.
Declarations
Ethics approval and consent to participate
All procedures were determined to be exempt and not regulated by the
University of Michigan Institutional Review Board.
Consent for publication
Consent to publish was obtained from all participants.
Competing interests
The authors declare that they have no competing interests.
Received: 9 May 2024 Accepted: 11 January 2025
References
Adams, W., Armstrong, Z., & Galovich, C. (2015). Can students learn from PhET
sims at home, alone? Physics Education Research Conference, College
Park, MD. https://www.compadre.org/Repository/document/ServeFile.cfm?ID=13828&DocID=4246
Bach, B., Stefaner, M., Boy, J., Drucker, S., Bartram, L., Wood, J., et al. (2018). Narra-
tive design patterns for data-driven storytelling. In Data-driven storytelling
(pp. 107–133). AK Peters/CRC Press.
Betsch, C., Ulshöfer, C., Renkewitz, F., & Betsch, T. (2011). The influence of narra-
tive v. statistical information on perceiving vaccination risks. Medical Deci-
sion Making, 31(5), 742–753. https://doi.org/10.1177/0272989X11400419
Borgo, R., & Edwards, D. J. (2020). The development of visualization psychology
analysis tools to account for trust. arXiv preprint arXiv:2009.13200.
Buckley, C., Robles, P., Hernandez, M., & Chien, A. C. (2022). How China Could
Choke Taiwan. The New York Times. https://www.nytimes.com/interactive/2022/08/25/world/asia/china-taiwan-conflict-blockade.html
Byrd, A., Cai, W., Macdonald, G., Rhyne, E., Throop, N., Ward, J., & White, J. (2022).
The Toss. The New York Times. https://www.nytimes.com/interactive/2022/08/28/sports/tennis/tennis-serve-ball-toss.html
Cage, F. (2021). Why 5G is slow in the U.S. but will get better eventually. Reuters. https://graphics.reuters.com/USA-5G/jnpweyldnpw/
Cook, J., Ecker, U., & Lewandowsky, S. (2015). Misinformation and how to correct it. In R. A. Scott & S. M. Kosslyn (Eds.), Emerging trends in the social and behavioral sciences. Wiley. https://doi.org/10.1002/9781118900772.etrds0222
Cook, J., & Lewandowsky, S. (2016). Rational irrationality: Modeling climate change belief polarization using Bayesian networks. Topics in Cognitive Science, 8(1), 160–179. https://doi.org/10.1111/tops.12186
Cutello, C. A., Hellier, E., Stander, J., & Hanoch, Y. (2020). Evaluating the effectiveness of a young driver-education intervention: Learn2Live. Transportation Research Part F: Traffic Psychology and Behaviour, 69, 375–384. https://doi.org/10.1016/j.trf.2020.02.009
De La Maza, C., Davis, A., Gonzalez, C., & Azevedo, I. (2019). Understanding cumulative risk perception from judgments and choices: An application to flood risks. Risk Analysis, 39(2), 488–504. https://doi.org/10.1111/risa.13206
Doyle, J. K. (1997). Judging cumulative risk. Journal of Applied Social Psychology, 27(6), 500–524. https://doi.org/10.1111/j.1559-1816.1997.tb00644.x
Dutta, P. K., Carvalho, R., Ovaska, M., Tai, C., Tennant, S., Weber, M., & Basile, S. (2019). Reading the Brexit tea leaves. https://graphics.reuters.com/BRITAIN-EU/010092G834N/index.html
Elhamdadi, H., Padilla, L., & Xiong, C. (2022a). Using processing fluency as a metric of trust in scatterplot visualizations. arXiv preprint arXiv:2209.14340.
Elhamdadi, H., Gaba, A., Kim, Y. S., & Xiong, C. (2022b). How do we measure trust in visual data communication? In 2022 IEEE evaluation and beyond-methodological approaches for visualization (BELIV) (pp. 85–92). IEEE.
Elhamdadi, H., Stefkovics, A., Beyer, J., Moerth, E., Pfister, H., Bearfield, C. X., & Nobre, C. (2023). Vistrust: A multidimensional framework and empirical study of trust in data visualizations. IEEE Transactions on Visualization and Computer Graphics.
Fagerlin, A., Wang, C., & Ubel, P. A. (2005). Reducing the influence of anecdotal reasoning on people’s health care decisions: Is a picture worth a thousand statistics? Medical Decision Making, 25(4), 398–405. https://doi.org/10.1177/0272989X05278931
Fansher, M., Adkins, T. J., Lalwani, P., Boduroglu, A., Carlson, M., Quirk, M., Lewis, R. L., Shah, P., Zhang, H., & Jonides, J. (2022b). Icon arrays reduce concern over COVID-19 vaccine side effects: A randomized control study. Cognitive Research: Principles and Implications, 7(1), 38. https://doi.org/10.1186/s41235-022-00387-5
Fansher, M., Adkins, T. J., Lewis, R. L., Boduroglu, A., Lalwani, P., Quirk, M., Shah, P., & Jonides, J. (2022a). How well do ordinary Americans forecast the growth of COVID-19? Memory & Cognition, 50(7), 1363–1380. https://doi.org/10.3758/s13421-022-01288-0
Feltz, A., & Cokely, E. T. (2024). Ethical interaction theory. In Diversity and disagreement: From fundamental biases to ethical interactions (pp. 211–246). Springer Nature Switzerland.
Franconeri, S. L., Padilla, L. M., Shah, P., Zacks, J. M., & Hullman, J. (2021). The science of visual data communication: What works. Psychological Science in the Public Interest, 22(3), 110–161. https://doi.org/10.1177/15291006211051956
Freling, T. H., Yang, Z., Saini, R., Itani, O. S., & Abualsamh, R. R. (2020). When poignant stories outweigh cold hard facts: A meta-analysis of the anecdotal bias. Organizational Behavior and Human Decision Processes, 160, 51–67. https://doi.org/10.1016/j.obhdp.2020.01.006
Galesic, M., & Garcia-Retamero, R. (2011). Graph literacy: A cross-cultural comparison. Medical Decision Making, 31(3), 444–457. https://doi.org/10.1177/0272989X10373805
Garcia-Retamero, R., & Cokely, E. T. (2014). Using visual aids to help people with low numeracy make better decisions. In Numerical reasoning in judgments and decision making about health (pp. 153–174). Cambridge University Press. https://doi.org/10.1017/CBO9781139644358.008
Garcia-Retamero, R., & Cokely, E. T. (2017). Designing visual aids that promote risk literacy: A systematic review of health research and evidence-based design heuristics. Human Factors, 59(4), 582–627. https://doi.org/10.1177/0018720817690634
Garcia-Retamero, R., Okan, Y., & Cokely, E. T. (2012). Using visual aids to improve communication of risks about health: A review. The Scientific World Journal, 2012(1), 562637. https://doi.org/10.1100/2012/562637
Grice, J. W., Medellin, E., Jones, I., Horvath, S., McDaniel, H., O’lansen, C., & Baker, M. (2020). Persons as effect sizes. Advances in Methods and Practices in Psychological Science, 3(4), 443–455. https://doi.org/10.1177/2515245920922982
Gustafson, A., & Rice, R. E. (2020). A review of the effects of uncertainty in public science communication. Public Understanding of Science, 29(6), 614–633. https://doi.org/10.1177/0963662520942122
Han, W., & Schulz, H. J. (2020, October). Beyond trust building—Calibrating trust in visual analytics. In 2020 IEEE workshop on trust and expertise in visual analytics (TREX) (pp. 9–15). IEEE. https://doi.org/10.1109/TREX51495.2020.00006
Hayes, A. F. (2017). Introduction to mediation, moderation, and conditional process analysis: A regression-based approach. Guilford Publications.
Hegarty, M. (2004). Dynamic visualizations and learning: Getting to the difficult questions. Learning and Instruction, 14(3), 343–351. https://doi.org/10.1016/j.learninstruc.2004.06.007
Herring, J., VanDyke, M. S., Cummins, R. G., & Melton, F. (2017). Communicating local climate risks online through an interactive data visualization. Environmental Communication, 11(1), 90–105. https://doi.org/10.1080/17524032.2016.1176946
Hullman, J. (2019). Why authors don’t visualize uncertainty. IEEE Transactions on Visualization and Computer Graphics, 26(1), 130–139. https://doi.org/10.1109/TVCG.2019.2934287
Hullman, J., & Diakopoulos, N. (2011). Visualization rhetoric: Framing effects in narrative visualization. IEEE Transactions on Visualization and Computer Graphics, 17(12), 2231–2240. https://doi.org/10.1109/TVCG.2011.255
Hullman, J., Drucker, S., Riche, N. H., Lee, B., Fisher, D., & Adar, E. (2013). A deeper understanding of sequence in narrative visualization. IEEE Transactions on Visualization and Computer Graphics, 19(12), 2406–2415. https://doi.org/10.1109/TVCG.2013.119
Janssen, E., van Osch, L., de Vries, H., & Lechner, L. (2013). The influence of narrative risk communication on feelings of cancer risk. British Journal of Health Psychology, 18(2), 407–419. https://doi.org/10.1111/j.2044-8287.2012.02098.x
Kelton, K., Fleischmann, K. R., & Wallace, W. A. (2008). Trust in digital information. Journal of the American Society for Information Science and Technology, 59(3), 363–374. https://doi.org/10.1002/asi.20722
Kerr, J., van der Bles, A. M., Dryhurst, S., Schneider, C. R., Chopurian, V., Freeman, A. L., & van der Linden, S. (2023). The effects of communicating uncertainty around statistics on public trust. Royal Society Open Science, 10(11), 230604. https://doi.org/10.1098/rsos.230604
Kim, Y. S., Reinecke, K., & Hullman, J. (2017). Explaining the gap: Visualizing one’s predictions improves recall and comprehension of data. In Proceedings of the 2017 CHI conference on human factors in computing systems (pp. 1375–1386). https://doi.org/10.1145/3025453.3025592
Koerth, M., & Elena, M. (2020). Why even a small Thanksgiving is dangerous. FiveThirtyEight.
Kong, H. K., Liu, Z., & Karahalios, K. (2019). Trust and recall of information across varying degrees of title-visualization misalignment. In Proceedings of the 2019 CHI conference on human factors in computing systems (pp. 1–13).
Lammers, J., Crusius, J., & Gast, A. (2020). Correcting misperceptions of exponential coronavirus growth increases support for social distancing. Proceedings of the National Academy of Sciences, 117(28), 16264–16266. https://doi.org/10.1073/pnas.2006048117
Lee, C., Yang, T., Inchoco, G. D., Jones, G. M., & Satyanarayan, A. (2021). Viral visualizations: How coronavirus skeptics use orthodox data practices to promote unorthodox science online. In Proceedings of the 2021 CHI conference on human factors in computing systems (pp. 1–18). https://doi.org/10.1145/3411764.3445211
Levine, A. J., Hernandez, M., & Spring, J. (2021). Vanishing tropical rainforests. https://graphics.reuters.com/GLOBAL-DEFORESTATION/RAINFOREST/klvykzrbxvg/
Lewandowsky, S., Ecker, U. K., Seifert, C. M., Schwarz, N., & Cook, J. (2012). Misinformation and its correction: Continued influence and successful debiasing. Psychological Science in the Public Interest, 13(3), 106–131. https://doi.org/10.1177/1529100612451018
Lewicki, R. J., Tomlinson, E. C., & Gillespie, N. (2006). Models of interpersonal trust development: Theoretical approaches, empirical evidence, and future directions. Journal of Management, 32(6), 991–1022. https://doi.org/10.1177/0149206306294405
Liew, T. W., Tan, S. M., & Seydali, R. (2014). The effects of learners’ differences on variable manipulation behaviors in simulation-based learning. Journal of Educational Technology Systems, 43(1), 13–34. https://doi.org/10.2190/ET.43.1.c
Lipkus, I. M., & Hollands, J. G. (1999). The visual communication of risk. JNCI Monographs, 1999(25), 149–163. https://doi.org/10.1093/oxfordjournals.jncimonographs.a024191
Lord, C. G., Ross, L., & Lepper, M. R. (1979). Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence. Journal of Personality and Social Psychology, 37(11), 2098. https://doi.org/10.1037/0022-3514.37.11.2098
Magana, A. J., & Silva Coutinho, G. (2017). Modeling and simulation practices for a computational thinking-enabled engineering workforce. Computer Applications in Engineering Education, 25(1), 62–78. https://doi.org/10.1002/cae.21779
Mayer, R. E. (2005). Cognitive theory of multimedia learning. The Cambridge Handbook of Multimedia Learning, 41, 31–48.
Mayr, E., Hynek, N., Salisu, S., & Windhager, F. (2019). Trust in information visualization. In TrustVis@EuroVis (pp. 25–29).
McGinnies, E., & Ward, C. D. (1980). Better liked than right: Trustworthiness and expertise as factors in credibility. Personality and Social Psychology Bulletin, 6(3), 467–472. https://doi.org/10.1177/014616728063023
McNaughton, C. D., Cavanaugh, K. L., Kripalani, S., Rothman, R. L., & Wallston, K. A. (2015). Validation of a short, 3-item version of the subjective numeracy scale. Medical Decision Making, 35(8), 932–936. https://doi.org/10.1177/0272989X15581800
Newell, R., Dale, A., & Winters, C. (2016). A picture is worth a thousand data points: Exploring visualizations as tools for connecting the public to climate change research. Cogent Social Sciences, 2(1), 1201885. https://doi.org/10.1080/23311886.2016.1201885
Nyhan, B., & Reifler, J. (2015). Displacing misinformation about events: An experimental test of causal corrections. Journal of Experimental Political Science, 2(1), 81–93. https://doi.org/10.1017/XPS.2014.22
O’Brien, T. C., Palmer, R., & Albarracin, D. (2021). Misplaced trust: When trust in science fosters belief in pseudoscience and the benefits of critical evaluation. Journal of Experimental Social Psychology, 96, 104184. https://doi.org/10.1016/j.jesp.2021.104184
Okan, Y., Garcia-Retamero, R., Cokely, E. T., & Maldonado, A. (2015). Improving risk understanding across ability levels: Encouraging active processing with dynamic icon arrays. Journal of Experimental Psychology: Applied, 21(2), 178. https://doi.org/10.1037/xap0000045
O’Neil, C. (2017). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown.
Padilla, L., Fygenson, R., Castro, S. C., & Bertini, E. (2022b). Multiple forecast visualizations (MFVs): Trade-offs in trust and performance in multiple COVID-19 forecast visualizations. IEEE Transactions on Visualization and Computer Graphics, 29(1), 12–22.
Padilla, L., Kay, M., & Hullman, J. (2022a). Uncertainty visualization. In W. Piegorsch, R. Levine, H. Zhang, & T. Lee (Eds.), Computational statistics in data science (pp. 405–421). Wiley.
Pandey, S., McKinley, O. G., Crouser, R. J., & Ottley, A. (2023). Do you trust what you see? Toward a multidimensional measure of trust in visualization. In 2023 IEEE visualization and visual analytics (VIS) (pp. 26–30). IEEE.
Park, S., & Gil-Garcia, J. R. (2022). Open data innovation: Visualizations and process redesign as a way to bridge the transparency-accountability gap. Government Information Quarterly, 39(1), 101456. https://doi.org/10.1016/j.giq.2020.101456
Peck, E. M., Ayuso, S. E., & El-Etr, O. (2019). Data is personal: Attitudes and perceptions of data visualization in rural Pennsylvania. In Proceedings of the 2019 CHI conference on human factors in computing systems (pp. 1–12).
Petrova, D., Garcia-Retamero, R., & Cokely, E. T. (2015). Understanding the harms and benefits of cancer screening: A model of factors that shape informed decision making. Medical Decision Making, 35(7), 847–858. https://doi.org/10.1177/0272989X15587676
Rakow, T., Heard, C. L., & Newell, B. R. (2015). Meeting three challenges in risk communication: Phenomena, numbers, and emotions. Policy Insights from the Behavioral and Brain Sciences, 2(1), 147–156. https://doi.org/10.1177/2372732215601442
Reinholtz, N., Maglio, S. J., & Spiller, S. A. (2021). Stocks, flows, and risk response to pandemic data. Journal of Experimental Psychology: Applied, 27(4), 657. https://doi.org/10.1037/xap0000395
Rhodes, R. E., Rodriguez, F., & Shah, P. (2014). Explaining the alluring influence of neuroscience information on scientific reasoning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 40(5), 1432. https://doi.org/10.1037/a0036844
Rodriguez, F., Rhodes, R. E., Miller, K. F., & Shah, P. (2016). Examining the influence of anecdotal stories and the interplay of individual differences on reasoning. Thinking & Reasoning, 22(3), 274–296. https://doi.org/10.1080/13546783.2016.1139506
Roozenbeek, J., Schneider, C. R., Dryhurst, S., Kerr, J., Freeman, A. L., Recchia, G., van der Bles, A. M., & van der Linden, S. (2020). Susceptibility to misinformation about COVID-19 around the world. Royal Society Open Science, 7(10), 201199.
Rutjens, B. T., Heine, S. J., Sutton, R. M., & van Harreveld, F. (2018). Attitudes towards science. In Advances in experimental social psychology (Vol. 57, pp. 125–165). Academic Press.
Segel, E., & Heer, J. (2010). Narrative visualization: Telling stories with data. IEEE Transactions on Visualization and Computer Graphics, 16(6), 1139–1148. https://doi.org/10.1109/TVCG.2010.179
Seifert, C. M. (2002). The continued influence of misinformation in memory: What makes a correction effective? In Psychology of learning and motivation (Vol. 41, pp. 265–292). Academic Press. https://doi.org/10.1016/S0079-7421(02)80009-3
Shah, P., Michal, A., Ibrahim, A., Rhodes, R., & Rodriguez, F. (2017). What makes everyday scientific reasoning so challenging? In Psychology of learning and motivation (Vol. 66, pp. 251–299). Academic Press. https://doi.org/10.1016/bs.plm.2016.11.006
Sloman, S. A. (1994). When explanations compete: The role of explanatory coherence on judgements of likelihood. Cognition, 52(1), 1–21. https://doi.org/10.1016/0010-0277(94)90002-7
Slovic, P. (2000). What does it mean to know a cumulative risk? Adolescents’ perceptions of short-term and long-term consequences of smoking. Journal of Behavioral Decision Making, 13(2), 259–266. https://doi.org/10.1002/(SICI)1099-0771(200004/06)13:2%3c259::AID-BDM336%3e3.0.CO;2-6
Slovic, P., Fischhoff, B., & Lichtenstein, S. (1978). Accident probabilities and seat belt usage: A psychological perspective. Accident Analysis & Prevention, 10(4), 281–285. https://doi.org/10.1016/0001-4575(78)90030-1
Slovic, P., Västfjäll, D., Erlandsson, A., & Gregory, R. (2017). Iconic photographs and the ebb and flow of empathic response to humanitarian disasters. Proceedings of the National Academy of Sciences, 114(4), 640–644. https://doi.org/10.1073/pnas.1613977114
Sterman, J. D. (2011). Communicating climate change risks in a skeptical world. Climatic Change, 108, 811–826. https://doi.org/10.1007/s10584-011-0189-3
Swire-Thompson, B., DeGutis, J., & Lazer, D. (2020). Searching for the backfire effect: Measurement and design considerations. Journal of Applied Research in Memory and Cognition, 9(3), 286–299. https://doi.org/10.1016/j.jarmac.2020.06.006
Thagard, P. (2007). Coherence, truth, and the development of scientific knowledge. Philosophy of Science, 74(1), 28–47. https://doi.org/10.1086/520941
Tiede, K. E., Bjälkebring, P., & Peters, E. (2022). Numeracy, numeric attention, and number use in judgment and choice. Journal of Behavioral Decision Making, 35(3), e2264. https://doi.org/10.1002/bdm.2264
van der Bles, A. M., van der Linden, S., Freeman, A. L., & Spiegelhalter, D. J. (2020). The effects of communicating uncertainty on public trust in facts and numbers. Proceedings of the National Academy of Sciences, 117(14), 7672–7683.
Van der Pligt, J. (1996). Risk perception and self-protective behavior. European Psychologist, 1(1), 34–43. https://doi.org/10.1027/1016-9040.1.1.34
Wang, H. C., & Doong, H. S. (2014). Effects of online advertising strategy on attitude towards healthcare service. In 2014 47th Hawaii international conference on system sciences (pp. 2725–2732). IEEE.
Witt, J., Hao, C., & Shah, P. (2022). The impact of visualizing the process of disease spread on social distancing intentions and attitudes. In Proceedings of the human factors and ergonomics society annual meeting (Vol. 66, No. 1, pp. 2026–2030). SAGE Publications. https://doi.org/10.1177/1071181322661172
Yang, F., Cai, M., Mortenson, C., Fakhari, H., Lokmanoglu, A. D., Hullman, J., Franconeri, S., Diakopoulos, N., Nisbet, E. C., & Kay, M. (2023). Swaying the public? Impacts of election forecast visualizations on emotion, trust, and intention in the 2022 US midterms. IEEE Transactions on Visualization and Computer Graphics.
Zebregs, S., van den Putte, B., Neijens, P., & de Graaf, A. (2015). The differential impact of statistical and narrative evidence on beliefs, attitude, and intention: A meta-analysis. Health Communication, 30(3), 282–289. https://doi.org/10.1080/10410236.2013.842528
Zhao, L. (2017). Interface factors that affect users’ trust towards information visualization (Master’s thesis, Purdue University).
Zipkin, D. A., Umscheid, C. A., Keating, N. L., Allen, E., Aung, K., Beyth, R., Kaatz, S., Mann, D. M., Sussman, J. B., Korenstein, D., & Schardt, C. (2014). Evidence-based risk communication: A systematic review. Annals of Internal Medicine, 161(4), 270–280.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.